Post-hoc explanation methods are used with the intent of providing insights about neural networks and are sometimes said to help engender trust in their outputs. However, popular explanation methods have been found to be fragile to minor perturbations of input features or model parameters. Relying on constraint relaxation techniques from non-convex optimization, we develop a method that upper-bounds the largest change an adversary can make to a gradient-based explanation via bounded manipulation of either the input features or the model parameters. By propagating a compact input or parameter set as symbolic intervals through the forward and backward computations of the neural network, we can formally certify the robustness of gradient-based explanations. Our bounds are differentiable, hence we can incorporate provable explanation robustness into neural network training. Empirically, our method surpasses the robustness provided by previous heuristic approaches. We find that our training method is the only one able to learn neural networks with certificates of explanation robustness across all six datasets tested.
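The certification idea can be illustrated with interval arithmetic on a toy network: an input interval is pushed through the forward pass, and the resulting activation-pattern intervals are pushed through the backward pass to bound each entry of the input gradient. The sketch below is a minimal illustration of that principle, not the paper's method; the two-layer network, its weights, and the perturbation radius `eps` are arbitrary assumptions.

```python
# Minimal sketch (not the paper's implementation): bound the input gradient of a
# tiny ReLU network f(x) = w2 . relu(W1 x + b1) over an l-infinity ball around x.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)   # hidden layer (assumed)
w2 = rng.normal(size=8)                                # scalar output head (assumed)

def interval_linear(l, u, W, b):
    """Bound W @ x + b elementwise for x in the box [l, u]."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b

def gradient_bounds(x, eps):
    # Forward pass with intervals around x.
    l0, u0 = x - eps, x + eps
    zl, zu = interval_linear(l0, u0, W1, b1)           # pre-activation bounds
    # Backward pass: ReLU'(z) is 0, 1, or anywhere in [0, 1] if z may cross zero.
    dl = (zl > 0).astype(float)                        # lower bound on ReLU'(z)
    du = (zu > 0).astype(float)                        # upper bound on ReLU'(z)
    # Gradient of the output wrt the input: W1^T (w2 * ReLU'(z)).
    prods = np.stack([w2 * dl, w2 * du])
    gl_hidden, gu_hidden = prods.min(0), prods.max(0)  # bound the elementwise product
    return interval_linear(gl_hidden, gu_hidden, W1.T, np.zeros(4))

x = rng.normal(size=4)
g_lo, g_hi = gradient_bounds(x, eps=0.05)
print("certified gradient range per input feature:", list(zip(g_lo, g_hi)))
```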
Neural network interpretation methods, particularly feature attribution methods, are known to be fragile with respect to adversarial input perturbations. To address this, several methods that enhance the local smoothness of the gradient during training have been proposed for attaining \textit{robust} feature attributions. However, the fact that prior analyses disregard the normalization of the attributions, which is essential for their visualization, has been an obstacle to understanding and improving the robustness of feature attribution methods. In this paper, we provide new insights by taking such normalization into account. First, we show that for every non-negative homogeneous neural network, a naive $\ell_2$-robust criterion for gradients is \textit{not} normalization invariant, which means that two functions with the same normalized gradient can be assigned different values of the criterion. Second, we formulate a normalization-invariant cosine distance-based criterion and derive its upper bound, which gives insight into why simply minimizing the Hessian norm at the input, as has been done in previous work, is not sufficient for attaining robust feature attributions. Finally, we propose to combine both the $\ell_2$ and the cosine distance-based criteria as regularization terms to leverage the advantages of both in aligning the local gradient. As a result, we experimentally show that models trained with our method produce much more robust interpretations on CIFAR-10 and ImageNet-100 without significantly hurting accuracy, compared to recent baselines. To the best of our knowledge, this is the first work to verify the robustness of interpretation on a larger-scale dataset beyond CIFAR-10, thanks to the computational efficiency of our method.
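The combined regularizer described above can be sketched as follows: compute the input gradient at a point and at a nearby perturbed point, then penalize both their $\ell_2$ distance and their cosine distance. The PyTorch sketch below is a hedged illustration under assumed choices; the perturbation scheme, `radius`, and the weights `lam_l2` / `lam_cos` are placeholders, not the authors' exact training setup.

```python
# Hedged sketch of an l2 + cosine gradient-alignment penalty (assumed setup).
import torch
import torch.nn.functional as F

def input_gradient(model, x, y):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # create_graph=True so the penalty itself can be backpropagated through
    return torch.autograd.grad(loss, x, create_graph=True)[0]

def gradient_alignment_penalty(model, x, y, radius=8 / 255, lam_l2=1.0, lam_cos=1.0):
    g = input_gradient(model, x, y)
    x_pert = x + radius * torch.randn_like(x).sign()    # placeholder perturbation
    g_pert = input_gradient(model, x_pert, y)
    g_flat, gp_flat = g.flatten(1), g_pert.flatten(1)
    l2_term = (g_flat - gp_flat).norm(dim=1).pow(2).mean()
    cos_term = (1.0 - F.cosine_similarity(g_flat, gp_flat, dim=1)).mean()
    return lam_l2 * l2_term + lam_cos * cos_term

# Usage inside a training step (model, x, y assumed defined):
#   loss = F.cross_entropy(model(x), y) + gradient_alignment_penalty(model, x, y)
#   loss.backward()
```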
The growing popularity of social media has raised concerns about children's safety online. Interactions between minors and adults with predatory intent are a particularly grave concern. Research on online sexual grooming has typically relied on domain experts to manually annotate conversations, limiting both its scale and scope. In this work, we test how well established methods can detect conversational behavior and substitute for expert human annotation. Guided by a psychological theory of online grooming, we label $6,772$ chat messages sent by child-sex offenders with one of eleven predatory behaviors. We train bag-of-words and natural language inference models to classify each behavior, and show that the best-performing models classify behaviors in a manner that is internally consistent but does not align with human annotation.
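As a hedged illustration of the bag-of-words baseline mentioned above, the sketch below fits a TF-IDF plus logistic regression pipeline that assigns a behavior label to a message. The messages and the two behavior labels are invented placeholders; the annotated offender data is, of course, not reproduced here.

```python
# Minimal bag-of-words behavior classifier (placeholder data, assumed setup).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = ["example chat message one", "example chat message two"]   # placeholders
behaviors = ["isolation", "trust development"]                        # placeholder labels

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),   # bag-of-words / n-gram features
    LogisticRegression(max_iter=1000),
)
clf.fit(messages, behaviors)
print(clf.predict(["another example message"]))
```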
Deploying AI-powered systems requires trustworthy models that support effective human interaction, going beyond raw prediction accuracy. Concept bottleneck models promote trustworthiness by conditioning classification tasks on an intermediate level of human-like concepts. This enables human interventions that correct mispredicted concepts in order to improve the model's performance. However, existing concept bottleneck models are unable to find an optimal compromise between high task accuracy, robust concept-based explanations, and effective interventions on concepts, particularly in real-world conditions where complete and accurate concept supervision is scarce. To address this, we propose Concept Embedding Models, a new family of concept bottleneck models that goes beyond the current accuracy-vs-interpretability trade-off by learning interpretable, high-dimensional concept representations. Our experiments show that concept embedding models (1) attain better or competitive task accuracy w.r.t. standard neural models without concepts, (2) provide concept representations that capture meaningful semantics, including their ground-truth labels, (3) support test-time concept interventions whose effect on test accuracy surpasses that of standard concept bottleneck models, and (4) scale to real-world conditions with scarce complete concept supervision.
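A simplified sketch of the concept-embedding idea follows: each concept has a learned positive and negative embedding, a predicted concept probability mixes the two, and the mixed embeddings feed the label predictor; a test-time intervention simply overwrites a predicted probability. The layer sizes and the intervention interface are illustrative assumptions, not the authors' exact architecture.

```python
# Simplified concept-embedding layer (assumed sizes, illustrative interface).
import torch
import torch.nn as nn

class ConceptEmbeddingLayer(nn.Module):
    def __init__(self, in_dim, n_concepts, emb_dim, n_classes):
        super().__init__()
        self.pos = nn.ModuleList([nn.Linear(in_dim, emb_dim) for _ in range(n_concepts)])
        self.neg = nn.ModuleList([nn.Linear(in_dim, emb_dim) for _ in range(n_concepts)])
        self.score = nn.Linear(2 * emb_dim, 1)                 # concept-probability head
        self.label_head = nn.Linear(n_concepts * emb_dim, n_classes)

    def forward(self, h, interventions=None):
        probs, mixed = [], []
        for i, (f_pos, f_neg) in enumerate(zip(self.pos, self.neg)):
            c_pos, c_neg = torch.relu(f_pos(h)), torch.relu(f_neg(h))
            p = torch.sigmoid(self.score(torch.cat([c_pos, c_neg], dim=-1)))
            if interventions is not None and i in interventions:
                # Test-time intervention: overwrite the predicted concept probability.
                p = torch.full_like(p, float(interventions[i]))
            probs.append(p)
            mixed.append(p * c_pos + (1 - p) * c_neg)          # mixed concept embedding
        return torch.cat(probs, dim=-1), self.label_head(torch.cat(mixed, dim=-1))

layer = ConceptEmbeddingLayer(in_dim=64, n_concepts=4, emb_dim=16, n_classes=10)
h = torch.randn(8, 64)                                         # backbone features (placeholder)
concept_probs, logits = layer(h, interventions={0: 1.0})       # force concept 0 to "present"
```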
This paper considers the problem of unsupervised 3D object reconstruction from single-view images in the wild. Owing to ambiguity and intrinsic ill-posedness, this problem is inherently hard to solve and therefore demands strong regularization to achieve disentanglement of the different latent factors. Unlike existing works that introduce explicit regularization into the objective function, we look to a different space for implicit regularization: the structure of the latent space. Specifically, we restrict the structure of the latent space so that it captures a topological causal ordering of the latent factors (i.e., representing causal dependencies as a directed acyclic graph). We first show that different causal orderings matter for 3D reconstruction, and then explore several approaches to find a task-dependent causal ordering of the factors. Our experiments demonstrate that the latent-space structure indeed acts as an implicit regularizer and introduces an inductive bias beneficial to reconstruction.
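As a loose illustration of what imposing a causal ordering on latent factors can look like, the sketch below generates factors sequentially, each conditioned only on its predecessors under a fixed ordering (a DAG constraint realised as a lower-triangular dependency). The factor count, dimensions, and networks are assumptions for illustration, not the paper's model.

```python
# Hedged sketch: latent factors generated under a fixed causal ordering (assumed design).
import torch
import torch.nn as nn

class OrderedLatents(nn.Module):
    def __init__(self, n_factors, factor_dim, noise_dim=8):
        super().__init__()
        self.noise_dim = noise_dim
        # factor i may depend on factors 0..i-1 plus its own noise source
        self.factor_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(i * factor_dim + noise_dim, 64), nn.ReLU(),
                          nn.Linear(64, factor_dim))
            for i in range(n_factors)
        ])

    def forward(self, batch_size):
        factors = []
        for net in self.factor_nets:
            noise = torch.randn(batch_size, self.noise_dim)
            parents = torch.cat(factors, dim=-1) if factors else noise[:, :0]
            factors.append(net(torch.cat([parents, noise], dim=-1)))
        return factors   # e.g. [pose, shape, texture, lighting] under one chosen ordering

latents = OrderedLatents(n_factors=4, factor_dim=16)(batch_size=2)
```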
Labels are crucial for training machine learning (ML) models. Typically, datasets for ML classification tasks contain hard labels, yet learning with soft labels has been shown to yield benefits in model generalization, robustness, and calibration. Earlier work has had success forming soft labels from multiple annotators' hard labels; however, this approach may not converge to the best labels and requires many annotators, which can be costly and inefficient. We focus on efficiently eliciting soft labels from individual annotators. We collect and release a dataset of soft labels for CIFAR-10 via a crowdsourcing study ($N = 242$). We demonstrate that learning with our labels achieves model performance comparable to prior approaches while requiring far fewer annotators. Our elicitation methodology therefore shows promise for enabling practitioners to enjoy the benefits of improved model performance and reliability with fewer annotations, and provides guidance for future dataset curators on the benefits of leveraging richer information, such as categorical uncertainty, from individual annotators.
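Training against elicited soft labels amounts to using a full target distribution per example instead of a one-hot class. A minimal PyTorch sketch of that loss is below; the logits and soft targets are random placeholders standing in for model outputs and elicited per-class probabilities.

```python
# Cross-entropy against soft targets (equivalent to KL divergence up to a constant).
import torch
import torch.nn.functional as F

def soft_label_loss(logits, soft_targets):
    """Per-example cross-entropy against a target probability distribution."""
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

logits = torch.randn(4, 10, requires_grad=True)           # model outputs for 4 examples
soft_targets = torch.softmax(torch.randn(4, 10), dim=1)   # placeholder soft labels
loss = soft_label_loss(logits, soft_targets)
loss.backward()
```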
A major challenge in studying the robustness of deep learning is defining the set of "meaningless" perturbations to which a given neural network (NN) should be invariant. Most work on robustness implicitly uses a human as the reference model to define such perturbations. Our work offers a new formulation of robustness by using another reference NN to define the set of perturbations a given NN should be invariant to, thus generalizing the reliance on a human reference to any reference NN. This makes measuring robustness equivalent to measuring the extent to which two NNs share invariances, for which we propose a measure called STIR. STIR repurposes existing representation similarity measures to make them suitable for measuring shared invariances. Using our measure, we gain insight into how shared invariances vary with changes in weight initialization, architecture, loss function, and training dataset. Our implementation is available at: \url{https://github.com/nvedant07/stir}.
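STIR builds on existing representation similarity measures; as a hedged illustration of that building block (not of STIR itself), the sketch below computes linear Centered Kernel Alignment (CKA) between two networks' activations on the same inputs. The activation matrices are random placeholders.

```python
# Linear CKA between two activation matrices (examples x features); placeholder data.
import numpy as np

def linear_cka(reps_a, reps_b):
    """Linear Centered Kernel Alignment between two (n, d) activation matrices."""
    a = reps_a - reps_a.mean(axis=0, keepdims=True)
    b = reps_b - reps_b.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(b.T @ a, ord="fro") ** 2
    norm_a = np.linalg.norm(a.T @ a, ord="fro")
    norm_b = np.linalg.norm(b.T @ b, ord="fro")
    return hsic / (norm_a * norm_b)

rng = np.random.default_rng(0)
reps_a = rng.normal(size=(100, 64))           # activations of network A on 100 inputs
reps_b = reps_a @ rng.normal(size=(64, 32))   # a linear transform of A's activations
print(linear_cka(reps_a, reps_b))
```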
Machine learning (ML) practitioners are increasingly tasked with developing models that are aligned with the values and goals of non-technical experts. However, insufficient consideration has been given to how practitioners should translate domain expertise into ML updates. In this paper, we consider how interactions between practitioners and experts can be captured systematically. We design a taxonomy that matches types of expert feedback with practitioner updates: a practitioner may receive feedback from an expert at the observation or domain level, and convert this feedback into updates to the dataset, the loss function, or the parameter space. We review existing work in ML and human-computer interaction to describe this feedback-update taxonomy, and highlight the insufficient attention paid to incorporating feedback from non-technical experts. We conclude with a set of open questions that naturally arise from our proposed taxonomy and the accompanying survey.
Conformal prediction (CP) is a wrapper around traditional machine learning models, giving coverage guarantees under the sole assumption of exchangeability; in classification problems, for a chosen significance level $\varepsilon$, CP guarantees that the error rate is at most $\varepsilon$, irrespective of whether the underlying model is misspecified. However, the prohibitive computational costs of "full" CP led researchers to design scalable alternatives, which, alas, do not attain the same guarantees or statistical power as full CP. In this paper, we use influence functions to efficiently approximate full CP. We prove that our method is a consistent approximation of full CP, and empirically show that the approximation error becomes smaller as the training set grows; e.g., for $10^{3}$ training points the two methods output p-values that are $<10^{-3}$ apart: a negligible error for any practical application. Our method enables scaling full CP to large real-world datasets. We compare our full CP approximation (ACP) to mainstream CP alternatives, and observe that our method is computationally competitive whilst enjoying the statistical predictive power of full CP.
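To make the object being approximated concrete, the sketch below runs "full" conformal prediction with a simple nearest-neighbour nonconformity score: each candidate label is temporarily assigned to the test point, scores are recomputed on the augmented set, and the p-value is the fraction of scores at least as large as the test point's. ACP replaces this per-candidate recomputation with an influence-function approximation, which is not shown here; the data and the score function are illustrative choices.

```python
# Full conformal prediction p-values with a toy nonconformity score (assumed setup).
import numpy as np

def knn_nonconformity(X, y, k=3):
    """Score each point by the mean distance to its k nearest same-class neighbours."""
    scores = np.empty(len(X))
    for i, (xi, yi) in enumerate(zip(X, y)):
        same = X[y == yi]
        d = np.sort(np.linalg.norm(same - xi, axis=1))[1:k + 1]   # skip distance to itself
        scores[i] = d.mean()
    return scores

def full_cp_pvalues(X_train, y_train, x_test, labels):
    pvals = {}
    for label in labels:
        # Full CP: augment the training set with the candidate-labelled test point.
        X_aug = np.vstack([X_train, x_test])
        y_aug = np.append(y_train, label)
        scores = knn_nonconformity(X_aug, y_aug)
        pvals[label] = np.mean(scores >= scores[-1])               # test point is last
    return pvals

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
y_train = np.array([0] * 30 + [1] * 30)
print(full_cp_pvalues(X_train, y_train, x_test=np.array([0.1, -0.2]), labels=[0, 1]))
```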
The range of application of artificial intelligence (AI) is vast, as is the potential for harm. Growing awareness of the potential risks posed by AI systems has spurred action to address those risks, while also eroding confidence in AI systems and in the organizations that develop them. A 2019 study found more than 80 organizations that had published and adopted "AI ethics principles," and more have joined since. But such principles often leave a gap between the "what" and the "how" of trustworthy AI development. Such gaps have enabled questionable or ethically dubious behavior, which undermines the credibility of specific organizations and of the field more broadly. There is thus an urgent need for concrete methods that allow AI developers both to prevent harm and to demonstrate their trustworthiness through verifiable behavior. Below, we explore mechanisms (drawn from arXiv:2004.07213) for creating an ecosystem in which AI developers can earn trust - if they are trustworthy. Better assessment of developer trustworthiness could inform user choice, employee action, investment decisions, legal recourse, and emerging governance regimes.